Tensor factorization has proven useful in a wide range of applications, from sensor array processing to communications, speech and audio signal processing, and machine learning. With few recent exceptions, all tensor factorization algorithms were originally developed for centralized, in-memory computation on a single machine; and the few that break away from this mold do not easily incorporate practically important constraints, such as nonnegativity. A new constrained tensor factorization framework is proposed in this paper, building upon the Alternating Direction Method of Multipliers (ADMoM). It is shown that this simplifies computations, bypassing the need to solve constrained optimization problems in each iteration; and it naturally leads to distributed algorithms suitable for parallel implementation on regular high-performance computing (e.g., mesh) architectures. This opens the door for many emerging big data-enabled applications. The methodology is exemplified using nonnegativity as a baseline constraint, but the proposed framework can more-or-less readily incorporate many other types of constraints. Numerical experiments are very encouraging, indicating that the ADMoM-based nonnegative tensor factorization (NTF) has high potential as an alternative to state-of-the-art approaches.
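To make the key mechanism concrete: the reason an ADMoM-style scheme can bypass constrained subproblems is that the nonnegativity constraint is moved onto an auxiliary (split) variable, whose update reduces to an elementwise projection, while the factor update is an unconstrained linear solve. The NumPy sketch below illustrates this idea for a third-order nonnegative CP model; it is a simplified per-mode variant written for illustration under stated assumptions, not the paper's exact ADMoM algorithm or its distributed implementation, and all names (khatri_rao, admm_nn_ls, ntf_admm) and parameter choices (rho, iteration counts) are the sketch's own.

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Khatri-Rao product of B (J x R) and C (K x R) -> (J*K x R)."""
    J, R = B.shape
    K, _ = C.shape
    return (B[:, None, :] * C[None, :, :]).reshape(J * K, R)

def admm_nn_ls(Xn, W, A, Z, U, rho=1.0, inner_iters=5):
    """ADMM sketch for min_{A >= 0} ||Xn - A W^T||_F^2 via the splitting A = Z, Z >= 0.

    The A-step is an unconstrained linear solve; nonnegativity is enforced by a
    cheap projection in the Z-step, so no constrained optimization problem is
    solved inside the iteration -- the point the abstract alludes to.
    """
    R = W.shape[1]
    G = W.T @ W + rho * np.eye(R)   # regularized Gram matrix (R x R)
    F = Xn @ W                      # data term, fixed within the inner loop
    L = np.linalg.cholesky(G)       # factor once, reuse every inner iteration
    for _ in range(inner_iters):
        rhs = F + rho * (Z - U)
        A = np.linalg.solve(L.T, np.linalg.solve(L, rhs.T)).T  # A = rhs @ inv(G)
        Z = np.maximum(0.0, A + U)  # projection replaces a constrained solve
        U = U + A - Z               # scaled dual (multiplier) update
    return A, Z, U

def ntf_admm(X, R, outer_iters=50, rho=1.0, seed=0):
    """Alternating per-mode ADMM sketch for a rank-R nonnegative CP model
    X[i, j, k] ~ sum_r A[i, r] B[j, r] C[k, r] of a 3-way array X."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    Z = [rng.random((n, R)) for n in (I, J, K)]   # nonnegative factor estimates
    A = [z.copy() for z in Z]
    U = [np.zeros_like(z) for z in Z]
    # Mode-n unfoldings consistent with khatri_rao() above (C-order reshape).
    unf = [X.reshape(I, -1),
           np.moveaxis(X, 1, 0).reshape(J, -1),
           np.moveaxis(X, 2, 0).reshape(K, -1)]
    for _ in range(outer_iters):
        for n in range(3):
            B, C = [Z[m] for m in range(3) if m != n]
            A[n], Z[n], U[n] = admm_nn_ls(unf[n], khatri_rao(B, C),
                                          A[n], Z[n], U[n], rho)
    return Z

# Tiny smoke test on a synthetic nonnegative rank-3 tensor.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A0, B0, C0 = (rng.random((n, 3)) for n in (10, 8, 6))
    X = np.einsum("ir,jr,kr->ijk", A0, B0, C0)
    A, B, C = ntf_admm(X, R=3)
    Xhat = np.einsum("ir,jr,kr->ijk", A, B, C)
    print("relative fit error:", np.linalg.norm(X - Xhat) / np.linalg.norm(X))
```

Note how the entire cost of enforcing nonnegativity is a single np.maximum call per inner iteration; swapping in a different proximable constraint only changes that one projection line, which is why frameworks of this kind can more-or-less readily accommodate other constraint types.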